
Monitoring violence online can cause real trauma for workers

Steve Stephens filmed himself shooting a Cleveland bystander, then confessed to the killing on Facebook Live. A man in Thailand broadcast himself killing his infant daughter on Facebook Live before killing himself. There have been other live-streamed suicides and rapes, many lingering on the popular platform for hours.

The first line of defense against those violent images spreading is, in most cases, a human being. Amid concerns that Facebook doesn't move quickly enough to address violence broadcast on its platform, the company announced Wednesday that it will nearly double the number of moderators it pays to monitor for inappropriate content.

Soon, a team of 7,500 people - 3,000 of them newly hired - will monitor for possible violations of Facebook's rules. They join an undetermined number of workers around the world who are largely responsible for monitoring the worst that all of us produce across the web. These jobs are essential to our lives on the internet today, yet are rarely discussed publicly by the companies that rely on them most, Facebook included.

Moderators are sometimes employees at the companies that need them, but often they're contractors, and some do this work for multiple companies at a time. They might live in the Philippines or another country; they might live in Seattle. The moderators go by different names from company to company. Facebook relies on its users to report possible violations to its "community operations" team, which is then responsible for applying Facebook's broad, often vague rules to each situation. Based on multiple reports of what these jobs are like across companies, moderators don't have long to make that call, sometimes just seconds.

Moderation is usually discussed in terms of how companies like Facebook and Twitter can do better at keeping the worst of the internet off their platforms, and away from users. But the stories of the people who do it should raise another consideration: What is the worst of the internet doing to the people we need to contain it?

It's easy to imagine that viewing these images of sex and violence in rapid succession, day after day, would do something to the brains of the people employed to moderate them. That thought was part of the reason that Sarah Roberts, an assistant professor in the department of information studies at UCLA, began studying what she calls "commercial content moderation" seven years ago. She has interviewed about two dozen people involved in moderation over the years, though many of those employed by tech companies to do this work are bound by nondisclosure agreements. Their work is treated like a trade secret.

"Some people report to me that they have little ill effects, and then in the next breath say, 'I've been drinking a lot,'" she said. She's also heard former moderators say that "certain types of material that disturbs them, for lack of a better word, triggering."

Others, she noted, seem to fare better. Even then, though, it's worth noting there's relatively little research on the effects this work may have on a person over time. And the companies that employ these moderators generally aren't in the habit of granting access to independent researchers.

When Adrian Chen (who recently released a documentary on this topic) observed moderators for Whisper at work in the Philippines a few years ago, he described what he saw across one worker's screen in just 25 minutes as "an impressive variety of d--- pics, thong shots, exotic objects inserted into bodies, hateful taunts, and requests for oral sex."

Moderators can and do see images of violence and death, rape and child exploitation over the course of their workday. In the end, broadly, one of two things happens to the people who do this work, Roberts said. "Either they burn out from what is essentially the trauma of it, the upsetting nature, or they become desensitized," she said.

The worst of these images are not always lost in the blur. One former moderator who worked for YouTube in its first years told The Verge about pulling up a video with "staggeringly violent images" involving a "toddler and a dimly lit hotel room." She saw many brutal images during her time as a moderator, but a decade later, that one has stuck with her.

There are other professions - law enforcement and journalism, for instance - where repeated exposure to violent or explicit imagery is part of the job. But Roberts cautioned against those direct comparisons. Working as a moderator doesn't usually come with the benefits of belonging to a "professional class," she said. Moderators are often hourly workers without benefits, and even those with relatively stable employment can struggle to access the resources they need to cope with the stress of their jobs.

Greg Blauert and Henry Soto worked for Microsoft's "online safety team," which is part of the company's customer service department. Late last year, they sued Microsoft, saying their jobs gave them post-traumatic stress disorder. The complaint accuses Microsoft of providing inadequate information on the dangers of the job ahead of time, and of neglecting to provide sufficient counseling to address the effects of doing this work.

"Mr. Soto was required to view many thousands of photographs and video of the most horrible, inhumane, and disgusting content one can imagine," the complaint reads. "In fact, many people simply cannot imagine what Mr. Soto had to view on a daily basis as most people do not understand how horrible and inhumane the rest of the world can be."

His symptoms, the complaint says, "included panic attacks in public, disassociation, depression, visual hallucinations, and an inability to be around computers or young children, including, at times, his own son, because it would trigger memories of horribly violent acts against children that he had witnessed."

Microsoft disputes their account and told The Guardian that the company "takes seriously its responsibility to remove and report imagery of child sexual exploitation and abuse being shared on its services, as well as the health and resiliency of the employees who do this important work."

It might seem tempting to ask why all of this can't just be done by computers. Although Facebook has hinted that it would like technology to play a bigger role in moderating its content, Roberts notes that the company's decision to nearly double its human moderation staff indicates an automated solution isn't feasible right now. Moderating the internet, she said, "needs humans." The job of screening out the most inhumane content online seems to require a uniquely human grasp of nuance, complexity, and context.
